SyGra: A Unified Graph-Based Framework for Scalable Generation, Quality Tagging, and Management of Synthetic Data

Pradhan, Bidyapati, Dasgupta, Surajit, Saha, Amit Kumar, Anustoop, Omkar, Puttagunta, Sriram, Mittal, Vipul, Sarda, Gopal

arXiv.org Artificial Intelligence

The advancement of large language models (LLMs) is critically dependent on the availability of high-quality datasets for Supervised Fine-Tuning (SFT) and alignment tasks such as Direct Preference Optimization (DPO). In this work, we present a comprehensive synthetic data generation framework that facilitates scalable, configurable, and high-fidelity generation of synthetic data tailored for these training paradigms. Our approach employs a modular and configuration-based pipeline capable of modeling complex dialogue flows with minimal manual intervention. The framework uses a dual-stage quality tagging mechanism, combining heuristic rules and LLM-based evaluations, to automatically filter and score data extracted from OASST-formatted conversations, ensuring the curation of high-quality dialogue samples. The resulting datasets are structured under a flexible schema supporting both SFT and DPO use cases, enabling seamless integration into diverse training workflows. Together, these innovations offer a robust solution for generating and managing synthetic conversational data at scale, significantly reducing the overhead of data preparation in LLM training pipelines.
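
As a concrete illustration of the dual-stage tagging idea, here is a minimal Python sketch: stage one applies cheap heuristic rules to an OASST-style message list, and stage two scores the survivors with an LLM judge. The helper names (heuristic_pass, llm_score), the field layout, and the thresholds are assumptions for illustration, not the SyGra API.

    def heuristic_pass(conv):
        # Stage 1: cheap rule-based filter over an OASST-style message list.
        turns = conv["messages"]
        if len(turns) < 2:                        # need at least prompt + reply
            return False
        if any(len(t["text"].strip()) < 10 for t in turns):
            return False                          # drop near-empty turns
        return True

    def tag_quality(convs, llm_score, threshold=7.0):
        # Stage 2: an LLM-based judge scores whatever survives the heuristics.
        kept = []
        for conv in convs:
            if not heuristic_pass(conv):
                continue
            score = llm_score(conv)               # any callable returning 0-10
            if score >= threshold:
                conv["quality"] = score           # tag for SFT/DPO curation
                kept.append(conv)
        return kept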


A Novel Multimodal RUL Framework for Remaining Useful Life Estimation with Layer-wise Explanations

Razzaq, Waleed, Zhao, Yun-Bo

arXiv.org Artificial Intelligence

Estimating the Remaining Useful Life (RUL) of mechanical systems is pivotal in Prognostics and Health Management (PHM). Rolling-element bearings are among the most frequent causes of machinery failure, highlighting the need for robust RUL estimation methods. Existing approaches often suffer from poor generalization, lack of robustness, high data demands, and limited interpretability. This paper proposes a novel multimodal-RUL framework that jointly leverages image representations (ImR) and time-frequency representations (TFR) of multichannel, nonstationary vibration signals. The architecture comprises three branches: (1) an ImR branch and (2) a TFR branch, both employing multiple dilated convolutional blocks with residual connections to extract spatial degradation features; and (3) a fusion branch that concatenates these features and feeds them into an LSTM to model temporal degradation patterns. A multi-head attention mechanism subsequently emphasizes salient features, followed by linear layers for final RUL regression. To enable effective multimodal learning, vibration signals are converted into ImR via the Bresenham line algorithm and into TFR using Continuous Wavelet Transform. We also introduce multimodal Layer-wise Relevance Propagation (multimodal-LRP), a tailored explainability technique that significantly enhances model transparency. The approach is validated on the XJTU-SY and PRONOSTIA benchmark datasets. Results show that our method matches or surpasses state-of-the-art baselines under both seen and unseen operating conditions, while requiring ~28% less training data on XJTU-SY and ~48% less on PRONOSTIA. The model exhibits strong noise resilience, and multimodal-LRP visualizations confirm the interpretability and trustworthiness of predictions, making the framework highly suitable for real-world industrial deployment.
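
To make the three-branch layout concrete, here is a minimal PyTorch sketch of the described architecture: two dilated-convolution branches with residual connections, concatenation, an LSTM over the time dimension, multi-head attention, and a linear regression head. Channel widths, block counts, pooling, and sequence handling are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class DilatedBlock(nn.Module):
        # One dilated convolution with a residual (skip) connection.
        def __init__(self, ch, dilation):
            super().__init__()
            self.conv = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation)
            self.act = nn.ReLU()

        def forward(self, x):
            return self.act(x + self.conv(x))

    class Branch(nn.Module):
        # Shared layout of the ImR and TFR branches: stacked dilated blocks
        # extract spatial degradation features, then global pooling.
        def __init__(self, in_ch=1, ch=32):
            super().__init__()
            self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)
            self.blocks = nn.Sequential(*[DilatedBlock(ch, d) for d in (1, 2, 4)])
            self.pool = nn.AdaptiveAvgPool2d(1)

        def forward(self, x):                     # x: (B*T, C, H, W)
            return self.pool(self.blocks(self.stem(x))).flatten(1)

    class MultimodalRUL(nn.Module):
        def __init__(self, ch=32, hidden=64, heads=4):
            super().__init__()
            self.imr, self.tfr = Branch(1, ch), Branch(1, ch)
            self.lstm = nn.LSTM(2 * ch, hidden, batch_first=True)
            self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, imr_seq, tfr_seq):      # each: (B, T, 1, H, W)
            B, T = imr_seq.shape[:2]
            f = torch.cat([self.imr(imr_seq.flatten(0, 1)),
                           self.tfr(tfr_seq.flatten(0, 1))], dim=1).view(B, T, -1)
            h, _ = self.lstm(f)                   # temporal degradation patterns
            a, _ = self.attn(h, h, h)             # emphasize salient time steps
            return self.head(a[:, -1])            # RUL regression, shape (B, 1)

    rul = MultimodalRUL()(torch.randn(2, 8, 1, 64, 64),
                          torch.randn(2, 8, 1, 64, 64))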


Mesh RAG: Retrieval Augmentation for Autoregressive Mesh Generation

Sun, Xiatao, Liang, Chen, Wang, Qian, Rakita, Daniel

arXiv.org Artificial Intelligence

3D meshes are a critical building block for applications ranging from industrial design and gaming to simulation and robotics. Traditionally, meshes are crafted manually by artists, a process that is time-intensive and difficult to scale. To automate and accelerate this asset creation, autoregressive models have emerged as a powerful paradigm for artistic mesh generation. However, current methods to enhance quality typically rely on larger models or longer sequences, which increases generation time, and their inherent sequential nature imposes a severe quality-speed trade-off. This sequential dependency also significantly complicates incremental editing. To overcome these limitations, we propose Mesh RAG, a novel, training-free, plug-and-play framework for autoregressive mesh generation models. Inspired by RAG for language models, our approach augments the generation process by leveraging point cloud segmentation, spatial transformation, and point cloud registration to retrieve, generate, and integrate mesh components. This retrieval-based approach decouples generation from its strict sequential dependency, facilitating efficient and parallelizable inference. We demonstrate the wide applicability of Mesh RAG across various foundational autoregressive mesh generation models, showing it significantly enhances mesh quality, accelerates generation speed compared to sequential part prediction, and enables incremental editing, all without model retraining.
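
A rough Python skeleton of the retrieve-generate-integrate loop described above, under our own reading of the abstract: segment the conditioning point cloud into parts, reuse a registered library component where a match exists, and fall back to the autoregressive generator otherwise. Every helper here (segment, retrieve, register) is a stand-in, not the paper's API.

    def mesh_rag(point_cloud, library, generator, segment, retrieve, register):
        parts = segment(point_cloud)               # point cloud segmentation
        mesh_parts = []
        for part in parts:                         # parts are independent, so
            hit = retrieve(library, part)          # this loop can parallelize
            if hit is not None:
                # spatial transformation + registration of the retrieved
                # component onto the query part (e.g. an ICP-style alignment)
                mesh_parts.append(register(hit, part))
            else:
                mesh_parts.append(generator(part)) # fall back to the AR model
        return mesh_parts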


B-Rep Distance Functions (BR-DF): How to Represent a B-Rep Model by Volumetric Distance Functions?

Zhang, Fuyang, Jayaraman, Pradeep Kumar, Xu, Xiang, Furukawa, Yasutaka

arXiv.org Artificial Intelligence

This paper presents a novel geometric representation for CAD Boundary Representation (B-Rep) based on volumetric distance functions, dubbed B-Rep Distance Functions (BR-DF). BR-DF encodes the surface mesh geometry of a CAD model as a signed distance function (SDF); B-Rep vertices, edges, faces, and their topology information are encoded as per-face unsigned distance functions (UDFs). An extension of the Marching Cubes algorithm converts BR-DF directly into a watertight CAD B-Rep model (strictly speaking, a faceted B-Rep model). A surprising characteristic of BR-DF is that this conversion process never fails. Leveraging the volumetric nature of BR-DF, we propose a multi-branch latent diffusion model with a 3D U-Net backbone for jointly generating the SDF and per-face UDFs of a BR-DF model. Our approach achieves CAD generation performance comparable to SOTA methods while reaching an unprecedented 100% success rate in producing (faceted) B-Rep models.
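
The paper's Marching Cubes extension (which also consumes the per-face UDFs) is its contribution; below is only the standard volumetric SDF-to-mesh step it builds on, shown on a toy sphere with scikit-image's marching_cubes.

    import numpy as np
    from skimage.measure import marching_cubes

    # Sample a toy SDF (a sphere of radius 0.5) on a 64^3 grid. BR-DF stores
    # such an SDF for the whole model plus one UDF per B-Rep face; here we
    # only show the classical SDF -> watertight-surface extraction.
    n = 64
    ax = np.linspace(-1.0, 1.0, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5       # signed distance to sphere

    verts, faces, normals, _ = marching_cubes(sdf, level=0.0,
                                              spacing=(ax[1] - ax[0],) * 3)
    print(verts.shape, faces.shape)               # watertight triangle mesh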


DocLens: A Tool-Augmented Multi-Agent Framework for Long Visual Document Understanding

Zhu, Dawei, Meng, Rui, Chen, Jiefeng, Li, Sujian, Pfister, Tomas, Yoon, Jinsung

arXiv.org Artificial Intelligence

Comprehending long visual documents, where information is distributed across extensive pages of text and visual elements, is a critical but challenging task for modern Vision-Language Models (VLMs). Existing approaches falter on a fundamental challenge: evidence localization. They struggle to retrieve relevant pages and overlook fine-grained details within visual elements, leading to limited performance and model hallucination. To address this, we propose DocLens, a tool-augmented multi-agent framework that effectively "zooms in" on evidence like a lens. It first navigates from the full document to specific visual elements on relevant pages, then employs a sampling-adjudication mechanism to generate a single, reliable answer. Paired with Gemini-2.5-Pro, DocLens achieves state-of-the-art performance on MMLongBench-Doc and FinRAGBench-V, surpassing even human experts. The framework's superiority is particularly evident on vision-centric and unanswerable queries, demonstrating the power of its enhanced localization capabilities.
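
One plausible reading of the sampling-adjudication step, sketched in Python: draw several candidate answers from the VLM over the localized evidence, accept a clear majority, and otherwise defer to an adjudicator call. vlm_answer and adjudicate are hypothetical stand-ins for model calls, not the DocLens interface.

    from collections import Counter

    def sample_adjudicate(question, evidence, vlm_answer, adjudicate, n=5):
        # Sampling: n independent candidate answers over the same evidence.
        candidates = [vlm_answer(question, evidence) for _ in range(n)]
        answer, count = Counter(candidates).most_common(1)[0]
        if count > n // 2:                  # clear majority: accept directly
            return answer
        # Adjudication: a judge call resolves disagreement among candidates.
        return adjudicate(question, evidence, candidates)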


ScrewSplat: An End-to-End Method for Articulated Object Recognition

Kim, Seungyeon, Ha, Junsu, Kim, Young Hun, Lee, Yonghyeon, Park, Frank C.

arXiv.org Artificial Intelligence

Figure 1 (caption): Articulated object recognition by splatting screw axes and Gaussians.

Articulated objects with movable parts - such as doors, laptops, and drawers - are common in everyday environments, and manipulating them requires understanding both their 3D geometry and underlying kinematic structure (e.g., joint types and axes). While prior work has addressed this using large-scale datasets of 3D objects with annotated joint axes in supervised settings [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], such methods struggle to generalize to unseen categories - a natural limitation of supervised learning. In this work, we tackle a more challenging yet practical scenario: inferring kinematic structure directly from multi-view RGB images under varying object configurations, without relying on category-specific supervision (see the left of Figure 1). Spurred in part by the success of neural rendering-based 3D reconstruction methods that require no supervised training [12, 13, 14, 15], recent works have adapted these frameworks for articulated object recognition [16, 17, 18, 19, 20], achieving promising results using raw RGB observations. However, a key drawback of these methods lies in their reliance on strong assumptions, such as a known number of articulated components or predefined joint types.
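
For readers unfamiliar with the screw-axis parameterization the title alludes to, here is the standard construction (not the paper's code): a joint is a screw axis, and the rigid motion of a part is the matrix exponential of the corresponding twist. A revolute joint has zero pitch; a prismatic joint is the pure-translation limit.

    import numpy as np
    from scipy.linalg import expm

    def hat(w):
        # Skew-symmetric matrix of a 3-vector (cross-product operator).
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def screw_transform(omega, q, pitch, theta):
        # SE(3) motion of angle theta about a screw axis with unit direction
        # omega through point q, translating pitch*theta along the axis.
        v = -np.cross(omega, q) + pitch * omega   # linear part of the twist
        xi = np.zeros((4, 4))
        xi[:3, :3] = hat(omega)
        xi[:3, 3] = v
        return expm(xi * theta)                   # 4x4 homogeneous transform

    # Example: a revolute door hinge along z through the origin, opened 90 deg.
    T = screw_transform(np.array([0.0, 0.0, 1.0]), np.zeros(3), 0.0, np.pi / 2)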


Machines Learn Number Fields, But How? The Case of Galois Groups

Lee, Kyu-Hwan, Lee, Seewoo

arXiv.org Artificial Intelligence

By applying interpretable machine learning methods such as decision trees, we study how simple models can classify the Galois groups of Galois extensions over $\mathbb{Q}$ of degrees 4, 6, 8, 9, and 10, using Dedekind zeta coefficients. Our interpretation of the machine learning results allows us to understand how the distribution of zeta coefficients depends on the Galois group, and to prove new criteria for classifying the Galois groups of these extensions. Combined with previous results, this work provides another example of a new paradigm in mathematical research driven by machine learning.
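
A minimal sketch of the kind of interpretable pipeline described, assuming feature vectors of Dedekind zeta coefficients and Galois-group labels; the random data below only fixes shapes (real inputs would come from number-field tables), and the five labels are the five groups of order 8.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical setup: each row of X holds the first 100 zeta coefficients
    # of a degree-8 Galois field; y holds the Galois group label.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 9, size=(500, 100))
    y = rng.choice(["C8", "C4xC2", "C2^3", "D4", "Q8"], size=500)

    # A shallow tree stays human-readable: its split rules are exactly the
    # kind of coefficient-based criteria the paper interprets and then proves.
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(tree))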


Forward Reverse Kernel Regression for the Schrödinger bridge problem

Belomestny, Denis, Schoenmakers, John

arXiv.org Machine Learning

In this paper, we study the Schrödinger Bridge Problem (SBP), which is central to entropic optimal transport. For general reference processes and initial and terminal distributions, we propose a forward-reverse iterative Monte Carlo procedure to approximate the Schrödinger potentials in a nonparametric way. In particular, we use kernel-based Monte Carlo regression in the context of Picard iteration of a corresponding fixed point problem as considered in [5]. By preserving positivity and contractivity in a Hilbert metric sense throughout the iteration, we develop a provably convergent algorithm. Furthermore, we provide convergence rates for the potential estimates and prove their optimality. Finally, as an application, we propose a non-nested Monte Carlo procedure for the finite-dimensional distributions of the Schrödinger Bridge process, based on the constructed potentials and the forward-reverse simulation method for conditional diffusions developed in [2].
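
For orientation, the fixed point behind the Picard iteration is the classical Schrödinger system (notation ours, not the paper's): with reference transition density $p(x,y)$ and endpoint marginals $\mu_0, \mu_1$, one seeks potentials $\varphi_0, \varphi_1$ with

    \varphi_0(x) \int p(x,y)\, \varphi_1(y)\, dy = \mu_0(x), \qquad
    \varphi_1(y) \int p(x,y)\, \varphi_0(x)\, dx = \mu_1(y).

The Picard step updates one potential from the other,

    \varphi_1^{(n+1)}(y) = \frac{\mu_1(y)}{\int p(x,y)\, \varphi_0^{(n)}(x)\, dx}, \qquad
    \varphi_0^{(n+1)}(x) = \frac{\mu_0(x)}{\int p(x,y)\, \varphi_1^{(n+1)}(y)\, dy},

and this map is positivity-preserving and contractive in Hilbert's projective metric, which is what the convergence argument exploits; in the paper the integrals are replaced by forward-reverse kernel-based Monte Carlo regression estimates.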


Squeeze3D: Your 3D Generation Model is Secretly an Extreme Neural Compressor

Dagli, Rishit, Guan, Yushi, Durvasula, Sankeerth, Mofayezi, Mohammadreza, Vijaykumar, Nandita

arXiv.org Artificial Intelligence

We propose Squeeze3D, a novel framework that leverages implicit prior knowledge learnt by existing pre-trained 3D generative models to compress 3D data at extremely high compression ratios. Our approach bridges the latent spaces between a pre-trained encoder and a pre-trained generation model through trainable mapping networks. Any 3D model represented as a mesh, point cloud, or a radiance field is first encoded by the pre-trained encoder and then transformed (i.e. compressed) into a highly compact latent code. This latent code can effectively be used as an extremely compressed representation of the mesh or point cloud. A mapping network transforms the compressed latent code into the latent space of a powerful generative model, which is then conditioned to recreate the original 3D model (i.e. decompression). Squeeze3D is trained entirely on generated synthetic data and does not require any 3D datasets. The Squeeze3D architecture can be flexibly used with existing pre-trained 3D encoders and existing generative models. It can flexibly support different formats, including meshes, point clouds, and radiance fields. Our experiments demonstrate that Squeeze3D achieves compression ratios of up to 2187x for textured meshes, 55x for point clouds, and 619x for radiance fields while maintaining visual quality comparable to many existing methods. Squeeze3D only incurs a small compression and decompression latency since it does not involve training object-specific networks to compress an object.
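
A hypothetical sketch of the bridging idea in PyTorch: a frozen pre-trained encoder produces the compact latent code (the "compressed file"), and a small trainable mapping network lifts that code into the latent space of a frozen pre-trained generator, which reconstructs the asset. Dimensions and module names are illustrative assumptions, not the Squeeze3D implementation.

    import torch
    import torch.nn as nn

    class MappingNet(nn.Module):
        # Trainable bridge between the encoder's compact latent space and the
        # (typically much larger) latent space of the generative model.
        def __init__(self, code_dim=128, gen_dim=1024):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(code_dim, 512), nn.SiLU(),
                                     nn.Linear(512, gen_dim))

        def forward(self, z):
            return self.net(z)

    def compress(asset, encoder):
        # encoder: frozen, pre-trained; the code is the entire stored payload.
        return encoder(asset)

    def decompress(code, mapper, generator):
        # generator: frozen, pre-trained; conditioned on the mapped latent
        # to recreate the original mesh / point cloud / radiance field.
        return generator(mapper(code))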


OmniIndoor3D: Comprehensive Indoor 3D Reconstruction

Wei, Xiaobao, Zhang, Xiaoan, Wang, Hao, Wuwu, Qingpo, Lu, Ming, Zheng, Wenzhao, Zhang, Shanghang

arXiv.org Artificial Intelligence

We propose a novel framework for comprehensive indoor 3D reconstruction using Gaussian representations, called OmniIndoor3D. This framework enables accurate appearance, geometry, and panoptic reconstruction of diverse indoor scenes captured by a consumer-level RGB-D camera. Since 3D Gaussian Splatting (3DGS) is primarily optimized for photorealistic rendering, it lacks the precise geometry critical for high-quality panoptic reconstruction. Therefore, OmniIndoor3D first combines multiple RGB-D images to create a coarse 3D reconstruction, which is then used to initialize the 3D Gaussians and guide the 3DGS training. To decouple the optimization conflict between appearance and geometry, we introduce a lightweight MLP that adjusts the geometric properties of the 3D Gaussians. The introduced lightweight MLP serves as a low-pass filter for geometry reconstruction and significantly reduces noise in indoor scenes. To improve the distribution of Gaussian primitives, we propose a densification strategy guided by panoptic priors to encourage smoothness on planar surfaces. Through the joint optimization of appearance, geometry, and panoptic reconstruction, OmniIndoor3D provides comprehensive 3D indoor scene understanding, which facilitates accurate and robust robotic navigation. We perform thorough evaluations across multiple datasets, and OmniIndoor3D achieves state-of-the-art results in appearance, geometry, and panoptic reconstruction. We believe our work bridges a critical gap in indoor 3D reconstruction. The code will be released at: https://ucwxb.github.io/OmniIndoor3D/
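
A speculative sketch of what such a lightweight geometry-adjustment MLP could look like in PyTorch: it predicts small residual updates to each Gaussian's position, scale, and rotation from a per-Gaussian feature, acting as a learned low-pass filter on noisy indoor geometry. Feature dimension and parameterization are assumptions, not the OmniIndoor3D implementation.

    import torch
    import torch.nn as nn

    class GeomAdjuster(nn.Module):
        # Predicts residual geometric updates per Gaussian: a position offset,
        # a scale offset, and a quaternion offset (re-normalized after adding).
        def __init__(self, feat_dim=32):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 3 + 3 + 4))

        def forward(self, feats, xyz, scale, quat):
            d = self.mlp(feats)
            new_quat = quat + d[:, 6:]
            new_quat = new_quat / new_quat.norm(dim=-1, keepdim=True)
            return xyz + d[:, :3], scale + d[:, 3:6], new_quat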